
The Black Swan

⌗ Metadata

  • Author: Nassim Taleb
  • Tags: #non-fiction

📖 Short Summary (1 takeaway)

  • The things that make the biggest impact on our society are Black Swan events, which most economists, statisticians, and decision makers fail to account for in their Mediocristan view

🧐 Why I am reading this book

  • Part of [[8. ☕️ Finer Things Book Club]] with Aaron

🙊 Great quotes

  • It illustrates a severe limitation to our learning from observations or experience and the fragility of our knowledge. One single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of white swans. All you need is one single (and, I am told, quite ugly) black bird.
  • First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.
  • The central idea of this book concerns our blindness with respect to randomness, particularly the large deviations: Why do we, scientists or nonscientists, hotshots or regular Joes, tend to see the pennies instead of the dollars?
  • The more unexpected the success of such a venture, the smaller the number of competitors, and the more successful the entrepreneur who implements the idea. The strategy is, then, to tinker as much as possible and try to collect as many Black Swan opportunities as you can.
  • Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the “normal,” particularly with “bell curve” methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty. Its nickname in this book is GIF, Great Intellectual Fraud.
  • History is opaque. You see what comes out, not the script that produces events, the generator of history.
  • The human mind suffers from three ailments as it comes into contact with history, what I call the triplet of opacity. They are: the illusion of understanding, or how everyone thinks he knows what is going on in a world that is more complicated (or random) than they realize; the retrospective distortion, or how we can assess matters only after the fact, as if they were in a rearview mirror (history seems clearer and more organized in history books than in empirical reality); and the overvaluation of factual information and the handicap of authoritative and learned people, particularly when they create categories—when they “Platonify.”
  • History Does Not Crawl, It Jumps
  • If I myself had to give advice, I would recommend someone pick a profession that is not scalable! A scalable profession is good only if you are successful; they are more competitive, produce monstrous inequalities, and are far more random, with huge disparities between efforts and rewards—a few can take a large share of the pie, leaving others out entirely at no fault of their own.
  • Evolution is scalable: the DNA that wins (whether by luck or survival advantage) will reproduce itself, like a bestselling book or a successful record, and become pervasive. Other DNA will vanish.
  • What we call “talent” generally comes from success, rather than its opposite.
  • In Extremistan, inequalities are such that one single observation can disproportionately impact the aggregate, or the total. So while weight, height, and calorie consumption are from Mediocristan, wealth is not. Almost all social matters are from Extremistan.
  • Look at the implication for the Black Swan. Extremistan can produce Black Swans, and does, since a few occurrences have had huge influences on history. This is the main idea of this book.
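
The Mediocristan/Extremistan contrast in the last two quotes is easy to see numerically. A minimal sketch of my own (the height and wealth parameters are made up, not from the book): in a Gaussian world no single observation moves the total, while in a heavy-tailed world one observation can be a visible share of it.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Mediocristan stand-in: heights ~ Gaussian (illustrative parameters).
height = rng.normal(170, 10, n)             # cm

# Extremistan stand-in: wealth ~ Pareto with a heavy tail (alpha ~= 1.1).
wealth = (rng.pareto(1.1, n) + 1) * 10_000  # dollars

for name, sample in [("height", height), ("wealth", wealth)]:
    share = sample.max() / sample.sum()
    print(f"{name}: largest single observation = {share:.6%} of the total")
```

Across a million people the tallest person is a negligible sliver of total height, while the richest one alone can hold a percent-level chunk of total wealth: that is what “one single observation can disproportionately impact the aggregate” means.
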
  • Mistaking a naïve observation of the past as something definitive or representative of the future is the one and only cause of our inability to understand the Black Swan.
  • From the standpoint of the turkey, the nonfeeding of the one thousand and first day is a Black Swan. For the butcher, it is not, since its occurrence is not unexpected. So you can see here that the Black Swan is a sucker’s problem. In other words, it occurs relative to your expectation.
  • [This is what] I call the narrative fallacy. (It is actually a fraud, but, to be more polite, I will call it a fallacy.) The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.
  • The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them.
  • While narrativity comes from an ingrained biological need to reduce dimensionality, robots would be prone to the same process of reduction. Information wants to be reduced.
  • If you work in a randomness-laden profession, as we see, you are likely to suffer burnout effects from that constant second-guessing of your past actions in terms of what played out subsequently. Keeping a diary is the least you can do in these circumstances.
  • Stalin, who knew something about the business of mortality, supposedly said, “One death is a tragedy; a million is a statistic.”
  • System 1, the experiential one, is effortless, automatic, fast, opaque (we do not know that we are using it), parallel-processed, and can lend itself to errors. It is what we call “intuition,” and performs these quick acts of prowess that became popular under the name blink, after the title of Malcolm Gladwell’s bestselling book. It produces shortcuts, called “heuristics,” that allow us to function rapidly and effectively. Dan Goldstein calls these heuristics “fast and frugal.” Others prefer to call them “quick and dirty.” Now, these shortcuts are certainly virtuous, since they are rapid, but, at times, they can lead us into some severe mistakes.
  • System 2, the cogitative one, is what we normally call thinking. It is what you use in a classroom, as it is effortful (even for Frenchmen), reasoned, slow, logical, serial, progressive, and self-aware (you can follow the steps in your reasoning). It makes fewer mistakes than the experiential system, and, since you know how you derived your result, you can retrace your steps and correct them in an adaptive manner.
  • The way to avoid the ills of the narrative fallacy is to favor experimentation over storytelling, experience over history, and clinical knowledge over theories.
  • [There are] two internal mechanisms behind our blindness to Black Swans, the confirmation bias and the narrative fallacy.
  • Let me distill the main idea behind what researchers call hedonic happiness. Making $1 million in one year, but nothing in the preceding nine, does not bring the same pleasure as having the total evenly distributed over the same period, that is, $100,000 every year for ten years in a row.
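
The turkey story is the book’s picture of naive induction, and it fits in a few lines of code. A sketch of my own; the rule-of-succession confidence measure is an illustrative choice, not Taleb’s.

```python
# The turkey problem: 1,000 days of feeding, then day 1,001.
days_fed = 1000

confidence = 0.0
for day in range(1, days_fed + 2):
    if day <= days_fed:
        # Naive inductive confidence (Laplace's rule of succession):
        # every confirming day raises the estimated P(fed tomorrow).
        confidence = day / (day + 1)
        if day in (1, 10, 100, 1000):
            print(f"day {day:4d}: P(fed tomorrow) ~ {confidence:.4f}")
    else:
        print(f"day {day}: the axe falls. Confidence had peaked at {confidence:.4f}.")
```

Confidence is at its maximum exactly when the risk is greatest; the butcher, who knows the generator of events, is never surprised.
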
  • As a matter of fact, your happiness depends far more on the number of instances of positive feelings, what psychologists call “positive affect” …
  • One of the attributes of a Black Swan is an asymmetry in consequences—either positive or negative.
  • Diagoras asked, “Where were the pictures of those who prayed, then drowned?” The drowned worshippers, being dead, would have a lot of trouble advertising their experiences from the bottom of the sea. This can fool the casual observer into believing in miracles. We call this the problem of silent evidence. The idea is simple, yet potent and universal.
  • The graveyard of failed persons will be full of people who shared the following traits: courage, risk taking, optimism, et cetera. There may be some differences in skills, but what truly separates the two is for the most part a single factor: luck. Plain luck.
  • This statement is actually empirically true: researchers confirm that gamblers have lucky beginnings (the same applies to stock market speculators). Does this mean that each one of us should become a gambler for a while, take advantage of lady luck’s friendliness to beginners, then stop? Those who continue gambling will remember having been lucky as beginners. The dropouts, by definition, will no longer be part of the surviving gamblers’ community. This explains beginner’s luck.
  • But, in his political campaign a few years ago, even he forgot to trumpet the tens of thousands of lives saved by his seat belt laws. It is much easier to sell “Look what I did for you” than “Look what I avoided for you.” A life saved is a statistic; a person hurt is an anecdote. Statistics are invisible; anecdotes are salient.
  • This brings us to the gravest of all manifestations of silent evidence, the illusion of stability. The bias lowers our perception of the risks we incurred in the past, particularly for those of us who were lucky to have survived them.
  • The reference point argument is as follows: do not compute odds from the vantage point of the winning gambler.
  • I repeat that we are explanation-seeking animals who tend to think that everything has an identifiable cause and grab the most apparent one as the explanation.
  • [The] ludic fallacy—the attributes of the uncertainty we face in real life have little connection to the sterilized ones we encounter in exams and games. Ludic comes from ludus, Latin for games. In real life you do not know the odds; you need to discover them, and the sources of uncertainty are not defined. All this, and yet the rest of the world still learns about uncertainty and probability from gambling examples.
  • It is why we do not see Black Swans: we worry about those that happened, not those that may happen but did not.
  • “It is tough to make predictions, especially about the future.”
  • Epistemic arrogance bears a double effect: we overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states.
  • We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad.
  • Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know.
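
The beginner’s-luck passage is survivorship bias, and it falls straight out of a coin-flip simulation. My own sketch (100,000 gamblers and 20 rounds are arbitrary choices): define “survivors” as gamblers who never went into the red, and every survivor won his first bet by construction.

```python
import numpy as np

rng = np.random.default_rng(42)
n_gamblers, n_rounds = 100_000, 20

# Fair coin flips, +1 or -1 per round: no skill anywhere in the model.
results = rng.choice([-1, 1], size=(n_gamblers, n_rounds))
bankroll = results.cumsum(axis=1)

# "Survivors" never dipped below zero, so they kept playing.
survivors = (bankroll >= 0).all(axis=1)

print(f"surviving gamblers:             {survivors.mean():.1%}")
print(f"survivors' mean first flip:     {results[survivors, 0].mean():+.2f}")
print(f"whole population's first flip:  {results[:, 0].mean():+.2f}")
```

The survivors’ first flip averages exactly +1.00, because anyone who lost it dropped out of the sample. Asking the survivors whether beginners are lucky is computing odds “from the vantage point of the winning gambler.”
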
  • I would not be the first to say that this optimization set back social science by reducing it from the intellectual and reflective discipline that it was becoming to an attempt at an “exact science.”
  • Tolstoy said that happy families were all alike, while each unhappy one is unhappy in its own way.
  • People have been shown to make errors equivalent to preferring apples to oranges, oranges to pears, and pears to apples, depending on how the relevant questions are presented to them.
  • Always remember that “R-square” is unfit for Extremistan; it is only good for academic promotion.
  • There is a blind spot: when we think of tomorrow we do not frame it in terms of what we thought about yesterday or the day before yesterday.
  • The first direction, from the ice cube to the puddle, is called the forward process. The second direction, the backward process, is much, much more complicated.
  • Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.
  • Instead of having medium risk, you have high risk on one side and no risk on the other. The average will be medium risk but constitutes a positive exposure to the Black Swan. This dovetails into the “barbell” strategy of taking maximum exposure to the positive Black Swans while remaining paranoid about the negative ones.
  • Seize any opportunity, or anything that looks like opportunity. They are rare, much rarer than you think.
  • All these recommendations have one point in common: asymmetry. Put yourself in situations where favorable consequences are much larger than unfavorable ones.
  • Say you need past data to discover whether a probability distribution is Gaussian, fractal, or something else. You will need to establish whether you have enough data to back up your claim. How do we know if we have enough data? From the probability distribution—a distribution does tell you whether you have enough data to “build confidence” about what you are inferring. If it is a Gaussian bell curve, then a few points will suffice (the law of large numbers once again). And how do you know if the distribution is Gaussian? Well, from the data. So we need the data to tell us what the probability distribution is, and a probability distribution to tell us how much data we need. This causes a severe regress argument.
  • Fractal randomness is a way to reduce these surprises, to make some of the swans appear possible, so to speak, to make us aware of their consequences, to make them gray. But fractal randomness does not yield precise answers.
  • The entire statistical business confused absence of proof with proof of absence. You need one single observation to reject the Gaussian, but millions of observations will not fully confirm the validity of its application.
  • I want to be broadly right rather than precisely wrong.
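
The closing asymmetry (“one single observation to reject the Gaussian”) can be made concrete. A sketch of my own; the ~20-sigma figure for the 1987 crash is a commonly cited estimate, not from this book.

```python
from math import erfc, sqrt

def two_sided_tail(k: float) -> float:
    """P(|X| > k * sigma) for a standard normal variable."""
    return erfc(k / sqrt(2.0))

for k in (5, 10, 20):
    print(f"{k:2d}-sigma move: P ~ {two_sided_tail(k):.2e}")

# 5-sigma  ~ 5.7e-07  (about once per 7,000 years of daily market data)
# 10-sigma ~ 1.5e-23
# 20-sigma ~ 5.5e-89  (never, on any timescale that matters)
```

Under a Gaussian model a ~20-sigma day, which is roughly what the 1987 crash is often estimated to have been, has probability around 1e-88. Seeing one such day rejects the bell curve outright, while millions of calm days can never confirm it.
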

✅ Actionable item

  • Apply this kind of thinking at work, though it is hard to implement in practice: in the end there doesn't seem to be a good way of predicting black swans...which is exactly what makes them black swans

🗂 Detailed Summary